# Long-text processing
## Mistral Small 3.1 24B Instruct 2503 Quantized.w8a8
Apache-2.0 · RedHatAI · Downloads: 833 · Likes: 2

An INT8-quantized (w8a8: 8-bit weights and activations) build of Mistral-Small-3.1-24B-Instruct-2503, optimized by Red Hat and Neural Magic for fast-response, low-latency deployments.
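The w8a8 scheme stores both weights and activations as 8-bit integers alongside a floating-point scale. A minimal sketch of symmetric per-tensor INT8 round-tripping in plain Python, illustrative only and not the model's actual quantization pipeline:

```python
def int8_quantize(values):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:  # all-zero tensor: any positive scale works
        scale = 1.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def int8_dequantize(codes, scale):
    """Recover approximate float values from the INT8 codes."""
    return [c * scale for c in codes]

weights = [0.02, -1.27, 0.5, 0.009]
codes, scale = int8_quantize(weights)
recovered = int8_dequantize(weights and codes, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
```

Real w8a8 inference kernels keep the matmul in INT8 and apply the scales afterward, which is where the latency benefit comes from; the sketch above only shows the numeric mapping.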
## Mistral Small 3.1 24B Instruct 2503 FP8 Dynamic
Apache-2.0 · RedHatAI · Downloads: 2,650 · Likes: 5

A 24B-parameter conditional-generation model on the Mistral3 architecture, optimized with FP8 dynamic quantization and suited to multilingual text generation and visual understanding tasks.
## Yi 1.5 9B
Apache-2.0 · 01-ai · Downloads: 6,140 · Likes: 48
Tags: Large Language Model, Transformers

Yi-1.5 is an upgraded version of the Yi model that excels at programming, mathematics, reasoning, and instruction following while retaining strong language understanding, commonsense reasoning, and reading comprehension.
## Lingowhale 8B
deeplang-ai · Downloads: 98 · Likes: 21
Tags: Large Language Model, Transformers, Supports Multiple Languages

A Chinese-English bilingual large language model jointly open-sourced by DeepLang Tech and the Tsinghua NLP Lab, pre-trained on trillions of high-quality tokens and supporting an 8K context window.
## Llama 65b Instruct
upstage · Downloads: 144 · Likes: 14
Tags: Large Language Model, Transformers, English

A 65B-parameter instruction-tuned large language model developed by Upstage on the LLaMA architecture, with support for long-text processing.
## Xlm Roberta Large Fa Qa
SajjadAyoubi · Downloads: 141 · Likes: 7
Tags: Question Answering System, Transformers

A Persian question-answering model based on the XLM-RoBERTa architecture, optimized for Persian QA tasks.
## Bart Squad2
primer-ai · Downloads: 18 · Likes: 2
Tags: Question Answering System, Transformers, English

A BART-based extractive question-answering model trained on the SQuAD 2.0 dataset, reaching an F1 score of 87.4.
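The quoted F1 is the standard SQuAD token-overlap metric: the harmonic mean of token-level precision and recall between the predicted and gold answer spans. A minimal sketch of that metric (simplified; the official evaluation script also normalizes case, articles, and punctuation, and takes the maximum over multiple gold answers):

```python
from collections import Counter

def squad_f1(prediction, gold):
    """Token-overlap F1 between a predicted answer span and a gold span."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

score = squad_f1("the Eiffel Tower", "Eiffel Tower")
# 2 shared tokens: precision 2/3, recall 2/2, F1 = 0.8
```

A dataset-level score like 87.4 is this per-example F1 averaged over the full evaluation set.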
## German Question Answer Electra
Sahajtomar · Downloads: 19 · Likes: 7
Tags: Question Answering System, Transformers, German

A German question-answering model fine-tuned from GELECTRA Large that performs strongly on the German MLQA and XQuAD datasets.
# Featured Recommended AI Models